TechScape: The people charged with making sure AI doesn't destroy humanity have left the building

The Guardian

I'm in Seoul for the International AI summit, the half-year follow-up to last year's Bletchley Park AI safety summit (the full sequel will be in Paris this autumn). While you read this, the first day of events will have just wrapped up – though, in keeping with the reduced fuss this time round, that was merely a "virtual" leaders' meeting. When the date was set for this summit – alarmingly late in the day for, say, a journalist with two preschool children for whom four days away from home is a juggling act – it was clear that there would be a lot to cover. The inaugural AI safety summit at Bletchley Park in the UK last year announced an international testing framework for AI models, after calls … for a six-month pause in development of powerful systems. There has been no pause. The Bletchley declaration, signed by UK, US, EU, China and others, hailed the "enormous global opportunities" from AI but also warned of its potential for causing "catastrophic" harm.


Things to know about an AI safety summit in Seoul

FOX News

South Korea is set to host a mini-summit this week on risks and regulation of artificial intelligence, following up on an inaugural AI safety meeting in Britain last year that drew a diverse crowd of tech luminaries, researchers and officials. The gathering in Seoul aims to build on work started at the U.K. meeting on reining in threats posed by cutting-edge artificial intelligence systems. Here is what you need to know about the AI Seoul Summit and AI safety issues.


U.S., U.K. Announce Partnership to Safety Test AI Models

TIME - Tech

The U.K. and U.S. governments announced Monday they will work together in safety testing the most powerful artificial intelligence models. An agreement, signed by Michelle Donelan, the U.K. Secretary of State for Science, Innovation and Technology, and U.S. Secretary of Commerce Gina Raimondo, sets out a plan for collaboration between the two governments. "I think of [the agreement] as marking the next chapter in our journey on AI safety, working hand in glove with the United States government," Donelan told TIME in an interview at the British Embassy in Washington, D.C. on Monday. "I see the role of the United States and the U.K. as being the real driving force in what will become a network of institutes eventually." The U.K. and U.S. AI Safety Institutes were established just one day apart, around the inaugural AI Safety Summit hosted by the U.K. government at Bletchley Park in November 2023.


Biden Economic Adviser Elizabeth Kelly Picked to Lead AI Safety Testing Body

TIME - Tech

Elizabeth Kelly, formerly an economic policy adviser to President Joe Biden, has been named as director of the newly formed U.S. Artificial Intelligence Safety Institute (USAISI), U.S. Commerce Secretary Gina Raimondo announced Wednesday. "For the United States to lead the world in the development of safe, responsible AI, we need the brightest minds at the helm," said Raimondo. "Thanks to President Biden's leadership, we're in a position of power to meet the challenges posed by AI, while fostering America's greatest strength: innovation." Kelly previously contributed to the Biden Administration's efforts to regulate AI through the AI Executive Order; an Administration official tells TIME she was involved in its development from the beginning. Kelly was "a driving force behind the domestic components of the AI executive order, spearheading efforts to promote competition, protect privacy, and support workers and consumers, and helped lead Administration engagement with allies and partners on AI governance," according to a press release announcing her appointment. Previously, Kelly was special assistant to the President for economic policy at the White House National Economic Council.


Unmasking the Shadows of AI: Investigating Deceptive Capabilities in Large Language Models

Guo, Linge

arXiv.org Artificial Intelligence

This research critically navigates the intricate landscape of AI deception, concentrating on the deceptive behaviours of Large Language Models (LLMs). My objective is to elucidate this issue, examine the discourse surrounding it, and subsequently delve into its categorisation and ramifications. The essay opens with an evaluation of the AI Safety Summit 2023 and an introduction to LLMs, emphasising the multidimensional biases that underlie their deceptive behaviours. The literature review covers four categories of deception: Strategic Deception, Imitation, Sycophancy, and Unfaithful Reasoning, along with the social implications and risks they entail. Lastly, I take an evaluative stance on various aspects of navigating the persistent challenges of deceptive AI, encompassing international collaborative governance, the reconfigured engagement of individuals with AI, proposals for practical adjustments, and specific elements of digital education.


OpenAI became the nexus of the technology world in 2023

Engadget

Let's take a look at how OpenAI and its chatbot have impacted consumer electronics in 2023 and where they might lead the industry in the new year. "Meteoric" doesn't do justice to OpenAI's rise this year. The company released ChatGPT on November 30, 2022. Within five days, the program had passed 1 million users; by January, 100 million people a month were logging on to use it. It took Facebook four and a half years to reach those sorts of engagement numbers.


The 3 Most Important AI Policy Milestones of 2023

TIME - Tech

In November 2022, OpenAI launched ChatGPT. Within five days, it had over a million users. Six months later, the CEOs of the world's leading AI companies, and hundreds of researchers and experts, signed a short statement warning that mitigating the risk of extinction from AI should be a global priority on the scale of preventing nuclear war. AI's rapid technological progress and the dire warnings from its creators provoked a reaction in capitals around the world. But as lawmakers and regulators rushed to write the rules charting AI's future, many warned their efforts were insufficient to mitigate the risks from, and capitalize on the benefits of, AI.


Google says new AI model Gemini outperforms ChatGPT in most tests

The Guardian

Google has unveiled a new artificial intelligence model that it claims outperforms ChatGPT in most tests and displays "advanced reasoning" across multiple formats, including an ability to view and mark a student's physics homework. The model, called Gemini, is the first to be announced since last month's global AI safety summit where tech firms agreed to collaborate with governments on testing advanced systems before and after their release. Google said it was in discussions with the UK's newly formed AI Safety Institute over testing Gemini's most powerful version, which will be released next year. The model comes in three versions and is "multimodal", which means it can comprehend text, audio, images, video and computer code simultaneously. Gemini, which will be folded into Google products including its search engine, is being released initially in more than 170 countries including the US on Wednesday in the form of an upgrade to Google's chatbot Bard.


France to host next AI safety summit as European nations jockey for tech leadership

FOX News

European nations continue to jockey for leadership on artificial intelligence (AI), with Paris announcing it will host the next safety summit shortly after Britain hosted the first one. "The first edition of the Artificial Intelligence Security Summit, organized by the United Kingdom, provides an opportunity to develop international cooperation in the field of security, a crucial issue for the years to come. It was, therefore, natural for France to host the second edition of this summit," French Minister Delegate for the Digital Economy Jean-Noël Barrot said in a press release. The future of AI remains up for grabs, with many nations trying to position themselves at the forefront of the race.


Multi-nation agreement seeks cooperation on development of 'frontier' AI tech

FOX News

The U.S. and other countries signed an agreement to collaborate and communicate on "frontier" artificial intelligence (AI) that aims to limit the risks the technology presents in the coming years. "We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge, in particular to prevent misuse and issues of control, and the amplification of other risks," reads the Bletchley Declaration, signed by 28 countries, including the U.S., China and members of the European Union. The international community has wrangled with the problem of AI, trying to balance the obvious and emerging risks of such advanced technology against what Britain's King Charles III called the "untold benefits." The Bletchley Declaration therefore lays out two key points: "identifying AI safety risks" and "building respective risk-based policies across our countries to ensure safety in light of such risks."